Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance. This is especially true with high-capacity parametric function approximators, such as deep networks. In this paper, we study how to bridge this gap, by employing uncertainty-aware dynamics models. We propose a new algorithm called probabilistic ensembles with trajectory sampling (PETS) that combines uncertainty-aware deep network dynamics models with sampling-based uncertainty propagation. Our comparison to state-of-the-art model-based and model-free deep RL algorithms shows that our approach matches the asymptotic performance of model-free algorithms on several challenging benchmark tasks, while requiring significantly fewer samples (e.g., 8 and 125 times fewer samples than Soft Actor Critic and Proximal Policy Optimization respectively on the half-cheetah task).
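To make the idea concrete, here is a minimal sketch of PETS-style uncertainty propagation: an ensemble of probabilistic networks each predicts a Gaussian over the next state, and particles are propagated through randomly chosen ensemble members. The network sizes, particle count, and toy dimensions are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class ProbabilisticDynamics(nn.Module):
    """One ensemble member: predicts a Gaussian over the next state."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * state_dim),  # mean and log-variance
        )

    def forward(self, state, action):
        mu, log_var = self.net(torch.cat([state, action], dim=-1)).chunk(2, dim=-1)
        return mu, log_var.clamp(-10, 2)

def trajectory_sampling(ensemble, state, action_seq, n_particles=20):
    """Propagate particles, each step through a random ensemble member."""
    particles = state.expand(n_particles, -1).clone()
    for action in action_seq:
        acts = action.expand(n_particles, -1)
        idx = torch.randint(len(ensemble), (n_particles,))
        next_p = torch.empty_like(particles)
        for m, model in enumerate(ensemble):
            mask = idx == m
            if mask.any():
                mu, log_var = model(particles[mask], acts[mask])
                # Sample the next state from the predicted Gaussian.
                next_p[mask] = mu + (0.5 * log_var).exp() * torch.randn_like(mu)
        particles = next_p
    return particles  # distribution over final states under model uncertainty

ensemble = [ProbabilisticDynamics(3, 1) for _ in range(5)]
final = trajectory_sampling(ensemble, torch.zeros(1, 3), [torch.zeros(1, 1)] * 10)
print(final.mean(0), final.std(0))
```

In the full algorithm this particle spread would score candidate action sequences inside a model-predictive control loop; the sketch only shows the uncertainty-propagation step.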
Technology advancements in wireless communications and high-performance Extended Reality (XR) have empowered the development of the Metaverse. The demand for Metaverse applications, and hence for real-time digital twinning of real-world scenes, is increasing. Nevertheless, replicating 2D physical-world images into 3D virtual-world scenes is computationally intensive and requires computation offloading. The disparity in transmitted scene dimension (2D as opposed to 3D) leads to asymmetric data sizes in the uplink (UL) and downlink (DL). To ensure the reliability and low latency of the system, we consider an asynchronous joint UL-DL scenario where, in the UL stage, the smaller-size physical-world scenes captured by multiple extended reality users (XUs) are uploaded to the Metaverse Console (MC) to be constructed and rendered. In the DL stage, the larger-size 3D virtual-world scenes need to be transmitted back to the XUs. The decisions pertaining to computation offloading and channel assignment are optimized in the UL stage, and the MC optimizes power allocation for users assigned a channel in the UL transmission stage. Several problems arise from this setup: (i) an interactive multi-process chain, specifically an Asynchronous Markov Decision Process (AMDP); (ii) joint optimization across multiple processes; and (iii) high-dimensional objective functions, i.e., hybrid reward scenarios. To address these challenges, we design a novel multi-agent reinforcement learning algorithm structure, namely Asynchronous Actors Hybrid Critic (AAHC). Extensive experiments demonstrate that, compared to the proposed baselines, AAHC obtains better solutions with favorable training time.
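Since AAHC is described here only at a high level, the following is a heavily hedged sketch of the asynchronous-actor / hybrid-critic structure: separate actors decide at the UL and DL stages, while a single critic with multiple value heads scores the components of the hybrid reward. All shapes, head counts, and the latency/reliability split are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, act_dim))

    def forward(self, obs):
        return torch.softmax(self.net(obs), dim=-1)  # discrete action probs

class HybridCritic(nn.Module):
    """One trunk, one value head per reward component (assumed: latency, reliability)."""
    def __init__(self, obs_dim, n_heads=2):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
        self.heads = nn.ModuleList([nn.Linear(64, 1) for _ in range(n_heads)])

    def forward(self, obs):
        h = self.trunk(obs)
        return torch.cat([head(h) for head in self.heads], dim=-1)

# Asynchronous actors: each acts at its own stage of the UL-DL chain.
ul_actor = Actor(obs_dim=8, act_dim=4)  # e.g. offloading / channel assignment
dl_actor = Actor(obs_dim=8, act_dim=3)  # e.g. power-level selection
critic = HybridCritic(obs_dim=8)

obs = torch.randn(1, 8)
print(ul_actor(obs), dl_actor(obs), critic(obs))
```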
Given an untrimmed video and a natural language query, video sentence grounding aims to localize the target temporal moment in the video. Existing methods mainly tackle this task by matching and aligning the semantics of the descriptive sentence and video segments at a single temporal resolution, neglecting the temporal consistency of video content across different resolutions. In this work, we propose a novel multi-resolution temporal video sentence grounding network, MRTNet, which consists of a multi-modal feature encoder, a Multi-Resolution Temporal (MRT) module, and a predictor module. The MRT module is an encoder-decoder network whose decoder output features are combined with Transformers to predict the final start and end timestamps. Notably, the MRT module is hot-pluggable, meaning it can be seamlessly incorporated into any anchor-free model. Besides, we utilize a hybrid loss to supervise cross-modal features in the MRT module for more accurate grounding at three scales: frame level, clip level, and sequence level. Extensive experiments on three prevalent datasets show the effectiveness of MRTNet.
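As an illustration of the multi-resolution idea, the sketch below pools a frame-level feature sequence at several temporal scales; the average-pooling scheme and dimensions are assumptions for illustration, not MRTNet's exact architecture.

```python
import torch
import torch.nn.functional as F

def multi_resolution_features(frames, scales=(1, 2, 4)):
    """frames: (batch, time, dim) -> one feature sequence per temporal scale."""
    x = frames.transpose(1, 2)  # (batch, dim, time) for 1-D pooling
    return [F.avg_pool1d(x, kernel_size=s, stride=s).transpose(1, 2)
            for s in scales]

video = torch.randn(2, 64, 256)  # 64 frames of 256-d features
for feat in multi_resolution_features(video):
    print(feat.shape)  # (2, 64, 256), (2, 32, 256), (2, 16, 256)
```

Each scale can then be supervised separately, which is the kind of structure a frame-, clip-, and sequence-level hybrid loss presupposes.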
Traffic accident prediction in driving videos aims to provide an early warning of accident occurrence and to support the decision making of safe driving systems. Previous works usually concentrate on the spatial-temporal correlation of object-level context, but they do not fit the inherent long-tailed data distribution well and are vulnerable to severe environmental changes. In this work, we propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition, in the form of text descriptions of the visual observations and driver attention, to facilitate model training. In particular, the text description provides dense semantic guidance for the primary context of the traffic scene, while the driver attention guides the model to focus on the critical regions most closely correlated with safe driving. CAP is composed of an attentive text-to-vision shift fusion module, an attentive scene-context transfer module, and a driver-attention-guided accident prediction module. We leverage the attention mechanism in these modules to explore the core semantic cues for accident prediction. To train CAP, we extend the existing self-collected DADA-2000 dataset (with annotated driver attention for each frame) with factual text descriptions of the visual observations preceding the accidents. Besides, we construct a new large-scale benchmark of 11,727 in-the-wild accident videos with over 2.19 million frames (named CAP-DATA), together with labeled fact-effect-reason-introspection descriptions and temporal accident frame labels. Extensive experiments validate the superiority of CAP over state-of-the-art approaches. The code, CAP-DATA, and all results will be released at \url{https://github.com/JWFanggit/LOTVS-CAP}.
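The abstract does not specify the fusion internals, but a text-to-vision attentive fusion can be illustrated with standard cross-attention, where text tokens query visual region features; the dimensions and single-layer design below are assumptions, not CAP's actual module.

```python
import torch
import torch.nn as nn

text = torch.randn(1, 12, 256)    # 12 description tokens, 256-d each
vision = torch.randn(1, 49, 256)  # 7x7 grid of visual region features

# Text queries attend over visual regions, pulling in scene context
# relevant to the description.
cross_attn = nn.MultiheadAttention(embed_dim=256, num_heads=4,
                                   batch_first=True)
fused, weights = cross_attn(query=text, key=vision, value=vision)
print(fused.shape, weights.shape)  # (1, 12, 256), (1, 12, 49)
```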
The Metaverse can be considered the extension of the present-day web, integrating the physical and virtual worlds and delivering hyper-realistic user experiences. The inception of the Metaverse brings forth many ecosystem services such as content creation, social entertainment, in-world value transfer, intelligent traffic, and healthcare. These services are compute-intensive and require computation offloading onto a Metaverse edge computing server (MECS). Existing Metaverse edge computing approaches do not efficiently and effectively handle resource allocation to ensure the fluid, seamless, and hyper-realistic Metaverse experience required by Metaverse ecosystem services. Therefore, we introduce a new Metaverse-compatible, Unified, User and Task (UUT) centered artificial intelligence (AI)-based mobile edge computing (MEC) paradigm, which serves as a concept upon which future AI control algorithms can be built to develop a more user- and task-focused MEC.
We investigate composed image retrieval with text feedback. Users gradually look for the target of interest by moving from coarse- to fine-grained feedback. However, existing methods merely focus on the latter, i.e., fine-grained search, by harnessing positive and negative pairs during training. This pair-based paradigm only considers the one-to-one distance between a pair of specific points, which is not aligned with the one-to-many coarse-grained retrieval process and compromises the recall rate. In an attempt to fill this gap, we introduce a unified learning approach that simultaneously models coarse- and fine-grained retrieval by considering multi-grained uncertainty. The key idea underpinning the proposed method is to treat fine- and coarse-grained retrieval as matching data points with small and large fluctuations, respectively. Specifically, our method contains two modules: uncertainty modeling and uncertainty regularization. (1) Uncertainty modeling simulates multi-grained queries by introducing identically distributed fluctuations in the feature space. (2) Based on the uncertainty modeling, we further introduce uncertainty regularization to adapt the matching objective according to the fluctuation range. Compared with existing methods, the proposed strategy explicitly prevents the model from pushing away potential candidates in the early stage and thus improves the recall rate. On three public datasets, i.e., FashionIQ, Fashion200k, and Shoes, the proposed method achieves +4.03%, +3.38%, and +2.40% Recall@50 accuracy over a strong baseline, respectively.
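A minimal sketch of the uncertainty-modeling idea: coarse- and fine-grained queries are simulated by jittering query features with identically distributed Gaussian noise, with larger fluctuations standing in for coarser queries. The noise scales and the margin-adaptation rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def jitter(features, sigma):
    """Add i.i.d. Gaussian fluctuations of scale sigma to query features."""
    return features + sigma * torch.randn_like(features)

query = F.normalize(torch.randn(8, 128), dim=-1)
fine = jitter(query, sigma=0.01)    # small fluctuation: fine-grained match
coarse = jitter(query, sigma=0.10)  # large fluctuation: coarse-grained match

def adaptive_margin(base_margin, sigma, alpha=10.0):
    """Uncertainty regularization (assumed form): relax the matching margin
    as the fluctuation grows, so coarse queries don't push away candidates."""
    return base_margin / (1.0 + alpha * sigma)

print(adaptive_margin(0.2, 0.01), adaptive_margin(0.2, 0.10))
```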
The rapid development of aspect-based sentiment analysis (ABSA) within recent decades shows great potential for real-world applications. The current ABSA works, however, are mostly limited to the scenario of a single text piece, leaving the study of dialogue contexts unexplored. In this work, we introduce a novel task of conversational aspect-based sentiment quadruple analysis, namely DiaASQ, which aims to detect target-aspect-opinion-sentiment quadruples in a dialogue. DiaASQ bridges the gap between fine-grained sentiment analysis and conversational opinion mining. We manually construct a large-scale, high-quality Chinese dataset and obtain an English version via manual translation. We also propose a neural model to benchmark the task; it effectively performs end-to-end quadruple prediction and incorporates rich dialogue-specific and discourse feature representations for better cross-utterance quadruple extraction. We finally point out several directions for future work to facilitate follow-up research on this new task. The DiaASQ data is available at https://github.com/unikcc/DiaASQ
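To make the task output concrete, here is a small sketch of the target-aspect-opinion-sentiment quadruple as a data structure; the example dialogue and labels are invented, not drawn from the DiaASQ dataset.

```python
from dataclasses import dataclass

@dataclass
class SentimentQuadruple:
    target: str     # the entity under discussion
    aspect: str     # the facet of the target being evaluated
    opinion: str    # the opinion expression in the utterance
    sentiment: str  # polarity: "pos", "neg", or "neu"

dialogue = [
    "A: How do you like the new phone?",
    "B: The battery life is amazing, but the camera is disappointing.",
]
# Note that the two quadruples span utterances: the target appears in A's
# turn while the aspects and opinions appear in B's (cross-utterance extraction).
quads = [
    SentimentQuadruple("phone", "battery life", "amazing", "pos"),
    SentimentQuadruple("phone", "camera", "disappointing", "neg"),
]
for q in quads:
    print(q)
```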
Effective force modulation during tissue manipulation is important for ensuring safe robot-assisted minimally invasive surgery (RMIS). Strict requirements for in-vivo distal force sensing have led to prior sensor designs that trade off ease of manufacture and integration against force measurement accuracy along the tool axis. These limitations have made collecting high-quality 3-degree-of-freedom (3-DoF) bimanual force data in RMIS inaccessible to researchers. We present a modular and manufacturable 3-DoF force sensor that integrates easily with an existing RMIS tool. We achieve this by relaxing biocompatibility and sterilizability requirements while utilizing commercial load cells and common electromechanical fabrication techniques. The sensor has a range of ±5 N axially and ±3 N laterally, with average root mean square errors (RMSEs) of below 0.15 N in all directions. During teleoperated mock tissue manipulation tasks, a pair of jaw-mounted sensors achieved average RMSEs of below 0.15 N in all directions and an RMSE of 0.156 N for grip force. The sensor has sufficient accuracy within the range of forces found in delicate manipulation tasks, with potential use in bimanual haptic feedback and robotic force control. As an open-source design, the sensors can be adapted to suit additional robotic applications outside of RMIS.
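For reference, the accuracy metric quoted above can be computed per axis as follows; the sample data is synthetic, and the ground-truth source (e.g., a commercial force/torque sensor) is an assumption.

```python
import numpy as np

def rmse_per_axis(measured, reference):
    """measured, reference: (n_samples, 3) forces in newtons (x, y, z)."""
    return np.sqrt(np.mean((measured - reference) ** 2, axis=0))

# Synthetic example: readings with ~0.1 N Gaussian noise around ground truth.
reference = np.random.uniform(-3, 3, size=(1000, 3))
measured = reference + np.random.normal(0, 0.1, size=(1000, 3))
print(rmse_per_axis(measured, reference))  # roughly 0.1 N per axis
```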
Improving interactivity and interconnectivity between humans is one of the highlights of the Metaverse. The Metaverse relies on a core method, the digital twin, which is a means of replicating physical-world objects, people, actions, and scenes into the virtual world. Being able to access scenes and information associated with the physical world in real time and under mobility is essential to developing a highly accessible, interactive, and interconnected experience for all users. Such development allows users in other locations to access high-quality real-world, up-to-date information about events happening elsewhere and to socialize with others hyper-interactively. However, providing users with continuous, smooth updates generated by others in the Metaverse is a challenging task, due to the data size of virtual-world graphics and the need for low-latency transmission. With the development of Mobile Augmented Reality (MAR), users can also interact via the Metaverse in a highly interactive manner, even under mobility. Hence, in our work, we consider an environment with users in the Internet of Vehicles (IoV) downloading real-time virtual-world updates from Metaverse Service Provider Cell Stations (MSPCSs) via wireless communications. We design an environment with multiple cell stations, where users' virtual-world graphics download tasks will be handed over between cell stations. As transmission latency is the primary concern for receiving virtual-world updates under mobility, our work aims to allocate system resources so as to minimize the total time users spend in vehicles downloading their virtual-world scenes from the cell stations. We utilize deep reinforcement learning and evaluate the performance of the algorithm under different environment configurations. Our work provides a use case of the Metaverse over AI-enabled 6G communications.
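The paper's exact state and action design is not given here, so the following is a heavily hedged sketch of one plausible deep-RL formulation: a Q-network scores which cell station should serve a vehicle's virtual-world download, with negative download time as the reward. The state layout, station count, and reward value are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_STATIONS = 3
q_net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                      nn.Linear(64, N_STATIONS))

# Toy state: vehicle position, speed, remaining scene bytes, per-station load.
state = torch.tensor([[0.2, 0.8, 0.5, 0.1, 0.6, 0.3]])
action = q_net(state).argmax(dim=-1)  # pick a serving cell station
download_time = 1.7                   # would come from the network simulator
reward = -download_time               # minimizing total download time
print(action.item(), reward)
```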
The rapid digitalization spurred by the COVID-19 pandemic has led to more cybercrime. Malware-as-a-service is now a booming business for cybercriminals. With the surge in malware activities, it is vital for cyber defenders to know more about the malware samples they have at hand, as this information can greatly influence their next course of action during a breach. Recently, researchers have shown how malware family classification can be done by converting malware binaries into grayscale images and then classifying them with neural networks. However, most work has focused on studying the impact of different neural network architectures on classification performance. Over the last year, researchers have shown that augmenting supervised learning with self-supervised learning can improve performance. Even more recently, Data2Vec was proposed as a modality-agnostic self-supervised framework for training neural networks. In this paper, we present Binimg2Vec, a framework for training malware binary image classifiers that incorporates both self-supervised and supervised learning to produce a model that consistently outperforms one trained only with supervised learning. We achieve a 4% improvement in classification performance and a 0.5% reduction in performance variance across multiple runs. We also show how our framework produces embeddings that cluster well, facilitating model interpretability.
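The binary-to-grayscale-image step described above is commonly implemented as follows: each byte becomes one pixel intensity and the byte stream is reshaped into a 2-D array. The fixed width and the truncation of trailing bytes are assumptions; the file path is a placeholder.

```python
import numpy as np

def binary_to_image(path, width=256):
    """Read a malware binary and render it as a grayscale image array."""
    data = np.fromfile(path, dtype=np.uint8)  # raw bytes as 0-255 intensities
    height = len(data) // width
    return data[: height * width].reshape(height, width)

# img = binary_to_image("sample.bin")  # placeholder path
# The resulting array can feed an image classifier, or a Data2Vec-style
# self-supervised pretraining stage as in Binimg2Vec.
```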